Name | Version | Summary | Date |
---|---|---|---|
model-alignment | 0.1 | Model Alignment: Aligning prompts to human preferences through natural language feedback | 2024-05-13 14:23:12 |
inseq | 0.6.0 | Interpretability for Sequence Generation Models 🔍 | 2024-04-13 13:37:37 |
lit-nlp | 1.1.1 | 🔥LIT: The Learning Interpretability Tool | 2024-04-09 23:12:49 |
shapiq | 0.0.6 | SHAPley Interaction Quantification (SHAP-IQ) for Explainable AI | 2024-03-20 15:38:24 |

Hour | Day | Week | Total |
---|---|---|---|
37 | 1365 | 10557 | 210196 |